Surprising properties of dropout in deep networks
Abstract
We analyze dropout in deep networks with rectified linear units and the quadratic loss. Our results expose surprising differences between the behavior of dropout and more traditional regularizers like weight decay. For example, on some simple data sets dropout training produces negative weights even though the output is the sum of the inputs. This provides a counterpoint to the suggestion that dropout discourages co-adaptation of weights. We also show that the dropout penalty can grow exponentially in the depth of the network while the weight-decay penalty remains essentially linear, and that dropout is insensitive to various re-scalings of the input features, outputs, and network weights. This last insensitivity implies that there are no isolated local minima of the dropout training criterion. Our work uncovers new properties of dropout, extends our understanding of why dropout succeeds, and lays the foundation for further progress.
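As a rough illustration of the setting described above (dropout with the quadratic loss, contrasted with weight decay), the sketch below estimates both training criteria on a toy linear model whose target is the sum of the inputs. The data, keep probability, and penalty strength are illustrative assumptions, not the paper's experimental setup.

```python
# Toy comparison of the dropout training criterion and weight decay on a
# single-layer linear model whose target is the sum of the inputs.
# All constants below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d, p = 512, 5, 0.5                      # samples, features, keep probability
X = rng.normal(size=(n, d))
y = X.sum(axis=1)                          # target is the plain sum of the inputs

def dropout_criterion(w, n_masks=2000):
    """Monte Carlo estimate of E_r[(y - (r/p * X) @ w)^2] over dropout masks r."""
    losses = []
    for _ in range(n_masks):
        r = rng.binomial(1, p, size=d) / p  # inverted-dropout scaling of kept inputs
        losses.append(np.mean((y - (X * r) @ w) ** 2))
    return np.mean(losses)

def weight_decay_criterion(w, lam=0.1):
    """Ordinary quadratic loss plus an L2 penalty."""
    return np.mean((y - X @ w) ** 2) + lam * np.sum(w ** 2)

w_sum = np.ones(d)                         # the "obvious" all-ones solution
print("dropout criterion at w = 1:    ", dropout_criterion(w_sum))
print("weight-decay criterion at w = 1:", weight_decay_criterion(w_sum))
```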
Similar papers
Modified Dropout for Training Neural Network
Dropout is a method that prevents overfitting when training deep neural networks. It involves sampling different sub-networks by temporarily removing nodes at random. Dropout works well in practice, but its properties have not been fully explored or theoretically justified. Our project explores the properties of dropout by applying methods used in optimization such as simulated annealing and lo...
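A minimal sketch of the node-dropping step mentioned above: each training-time forward pass samples a random sub-network by zeroing hidden units with probability 1 - p. The network shapes and keep probability are illustrative assumptions.

```python
# Sampling a random sub-network per forward pass by dropping hidden units.
import numpy as np

rng = np.random.default_rng(0)
p = 0.5                                     # probability of keeping a hidden unit

def forward(x, W1, W2, train=True):
    h = np.maximum(0.0, x @ W1)             # ReLU hidden layer
    if train:
        mask = rng.binomial(1, p, size=h.shape)
        h = h * mask / p                    # temporarily remove nodes, rescale the rest
    return h @ W2

x = rng.normal(size=(4, 8))
W1 = rng.normal(size=(8, 16)) * 0.1
W2 = rng.normal(size=(16, 1)) * 0.1
print(forward(x, W1, W2, train=True).shape)  # a different sub-network on each call
```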
Understanding Dropout
Dropout is a relatively new algorithm for training neural networks which relies on stochastically “dropping out” neurons during training in order to avoid the co-adaptation of feature detectors. We introduce a general formalism for studying dropout on either units or connections, with arbitrary probability values, and use it to analyze the averaging and regularizing properties of dropout in bot...
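The following sketch roughly illustrates the distinction drawn above between dropout on units and dropout on connections, each with its own keep probability; the dimensions and probabilities are illustrative assumptions, not the paper's formalism.

```python
# Contrasting unit-level and connection-level dropout masks.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(3, 6))
W = rng.normal(size=(6, 4)) * 0.1
p_unit, p_conn = 0.8, 0.6

# Unit dropout: one Bernoulli variable per input unit, shared by its outgoing weights.
unit_mask = rng.binomial(1, p_unit, size=(1, 6))
out_units = (x * unit_mask) @ W

# Connection dropout: an independent Bernoulli variable per individual weight.
conn_mask = rng.binomial(1, p_conn, size=W.shape)
out_conns = x @ (W * conn_mask)

print(out_units.shape, out_conns.shape)
```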
Improving Deep Neural Networks with Probabilistic Maxout Units
We present a probabilistic variant of the recently introduced maxout unit. The success of deep neural networks utilizing maxout can partly be attributed to favorable performance under dropout, when compared to rectified linear units. It however also depends on the fact that each maxout unit performs a pooling operation over a group of linear transformations and is thus partially invariant to ch...
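A minimal sketch of the pooling operation a maxout unit performs: its output is the maximum over a small group of linear transformations of the input. The group size and dimensions are illustrative assumptions.

```python
# A maxout layer: each unit pools (takes the max) over k linear transformations.
import numpy as np

rng = np.random.default_rng(2)
d_in, d_out, k = 8, 4, 3                    # input dim, maxout units, pieces per unit
W = rng.normal(size=(d_in, d_out, k)) * 0.1
b = np.zeros((d_out, k))

def maxout(x):
    z = np.einsum("ni,iok->nok", x, W) + b  # k linear transformations per unit
    return z.max(axis=-1)                   # pool by taking the max over the group

x = rng.normal(size=(5, d_in))
print(maxout(x).shape)                      # (5, 4)
```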
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
Deep learning has gained tremendous attention in applied machine learning. However, such tools for regression and classification do not capture model uncertainty. Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost. We show that dropout in neural networks (NNs) can be cast as a Bayesian approximation....
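A minimal sketch of the idea this abstract points to, often referred to as Monte Carlo dropout: keep dropout active at prediction time and average several stochastic forward passes to obtain a predictive mean and a spread that serves as an uncertainty estimate. The tiny network and keep probability are illustrative assumptions.

```python
# Monte Carlo dropout: repeated stochastic forward passes at test time
# give a predictive mean and an uncertainty estimate.
import numpy as np

rng = np.random.default_rng(3)
p = 0.5
W1 = rng.normal(size=(4, 32)) * 0.3
W2 = rng.normal(size=(32, 1)) * 0.3

def stochastic_forward(x):
    h = np.maximum(0.0, x @ W1)
    h = h * rng.binomial(1, p, size=h.shape) / p   # dropout stays on at test time
    return h @ W2

x = rng.normal(size=(1, 4))
samples = np.stack([stochastic_forward(x) for _ in range(200)])
print("predictive mean:", samples.mean(), " predictive std:", samples.std())
```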
Dropout as a Bayesian Approximation: Appendix
We show that a neural network with arbitrary depth and non-linearities, with dropout applied before every weight layer, is mathematically equivalent to an approximation to a well known Bayesian model. This interpretation might offer an explanation to some of dropout’s key properties, such as its robustness to overfitting. Our interpretation allows us to reason about uncertainty in deep learning...